Scour
🗺️ Region Inference
Memory Safety, Lifetime Analysis, MLKit, Region Types
82963
posts in
830.1
ms
Towards Worst-Case Guarantees with Scale-Aware Interpretability · arxiv.org · 1d · 🪜 Recursive Descent

feldera/feldera: The Feldera Incremental Computation Engine · github.com · 1h · 🚂 Cranelift IR

How we cut Vertex AI latency by 35% with GKE Inference Gateway · cloud.google.com · 17h · 📊 Profilers

Understanding LLM Inference Engines: Inside Nano-vLLM (Part 2) · neutree.ai · 20h · Discuss: Hacker News · 🔄 Subinterpreters

Fast Autoscheduling for Sparse ML Frameworks · ajroot.pl · 2d · Discuss: Hacker News · 🚀 MLton

Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets · dev.to · 12h · Discuss: DEV · 🌱 Minimal ML

Hello Edge: Keyword Spotting on Microcontrollers · paperium.net · 14h · Discuss: DEV · 🔌 Microcontrollers

Is Your Machine Learning Pipeline as Efficient as it Could Be? · kdnuggets.com · 22h · 🚀 MLton

Continual learning and the post-monolith AI era · baseten.co · 12h · Discuss: Hacker News · 🧠 Memory Models

Kubernetes Operator for automated Jupyter Notebook validation in MLOps pipelines · reddit.com · 11h · Discuss: r/kubernetes · ✅ Configuration Validation

ML-LIB: Machine Learning Library Proposed For The Linux Kernel · phoronix.com · 15h · Discuss: Hacker News · 🌱 Minimal ML

Determining Energy Efficiency Sweet Spots in Production LLM Inference · arxiv.org · 1d · 💓 Live Variable Analysis

Deterministic AI: Reclaiming Predictable Latency with Rust and Zero-Cost Abstractions · dev.to · 21h · Discuss: DEV · 🦀 MIR Optimization

LLM Inference Benchmarking - Measure What Matters · digitalocean.com · 20h · ⚡ Performance

Build a Compiler in Five Projects · kmicinski.com · 1h · 🎭 Racket Modules

Finding the needle in the logstack: Reducing LLM context with TF-IDF · eliseomartelli.it · 1d · 🏗️ MLIR

Why Large Language Models Make Terrible Compilers — And Why the Industry Keeps Trying Anyway · webpronews.com · 9h · 🏗️ LLVM

Hybrid Cryo‑EM‑Graph Neural Network for Rapid Prediction of Protein‑Protein Interaction Free‑Energy Landscapes · freederia.com · 23h · 📋 JSON Parsing
**Abstract** Quantitative characterization of ...

I run local LLMs daily, but I'll never trust them for these tasks · xda-developers.com · 20h · 🏰 Capability Machines

Examining Turbopuffer ANN v3 · terencezl.github.io · 1d · Discuss: Hacker News · ⚡ Tokenizer Optimization